# Whole Word Masking Pretraining
## Chinese RoBERTa-wwm-ext-large

A large Chinese pretrained BERT model that uses the whole word masking strategy, designed to accelerate Chinese natural language processing research.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: hfl
- Downloads: 30.27k · Likes: 200
## RBT4

A Chinese pretrained BERT model using the whole word masking strategy, released by the Harbin Institute of Technology–iFLYTEK Joint Laboratory (HFL) to accelerate Chinese natural language processing research.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: hfl
- Downloads: 22 · Likes: 6
## RBT6

A re-trained 6-layer RoBERTa-wwm-ext model that applies the whole word masking technique to Chinese pretraining.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: hfl
- Downloads: 796 · Likes: 9
## Chinese RoBERTa-wwm-ext

A Chinese pretrained BERT model using whole word masking, designed to accelerate the development of Chinese natural language processing.

- License: Apache-2.0
- Tags: Large Language Model, Chinese
- Author: hfl
- Downloads: 96.54k · Likes: 324
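The hfl checkpoints above are all BERT-style masked language models, so they can be loaded with the Hugging Face Transformers fill-mask pipeline. The sketch below assumes the `hfl/chinese-roberta-wwm-ext` model ID from the card above; the example sentence is illustrative only.

```python
# Minimal sketch: masked-word prediction with a Chinese WWM checkpoint.
# Assumes the Hugging Face model ID "hfl/chinese-roberta-wwm-ext"; any of the
# hfl whole-word-masking checkpoints listed above can be swapped in.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-roberta-wwm-ext")

# Whole word masking only changes how tokens are masked during pretraining;
# at inference time the model behaves like any BERT-style masked LM and
# predicts one [MASK] token at a time.
for prediction in fill_mask("使用语言模型来预测下一个[MASK]。"):
    print(prediction["token_str"], round(prediction["score"], 4))
```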
## MuRIL Adapted Local

MuRIL is an open-source BERT model from Google, pretrained on 17 Indian languages and their transliterated forms to provide multilingual representations.

- License: Apache-2.0
- Tags: Large Language Model, Supports Multiple Languages
- Author: monsoon-nlp
- Downloads: 24 · Likes: 2
## XLM-RoBERTa Large QA Multilingual Finedtuned Ru

A pretrained model based on the XLM-RoBERTa architecture, trained with a masked language modeling objective and fine-tuned on English and Russian question answering datasets.

- License: Apache-2.0
- Tags: Question Answering System, Transformers, Supports Multiple Languages
- Author: AlexKay
- Downloads: 1,814 · Likes: 48
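Because this last entry is fine-tuned for extractive question answering rather than masked language modeling, it is used through the question-answering pipeline instead. The sketch below assumes the `AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru` model ID from the card above; the Russian question and context are illustrative only.

```python
# Minimal sketch: extractive QA with the XLM-RoBERTa model listed above.
# Assumes the Hugging Face model ID
# "AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru"; the inputs
# below are illustrative only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

result = qa(
    question="Где находится Эйфелева башня?",      # "Where is the Eiffel Tower?"
    context="Эйфелева башня находится в Париже.",  # "The Eiffel Tower is in Paris."
)
print(result["answer"], round(result["score"], 4))
```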